1.
Applications in game technology, law enforcement, security, medicine and biometrics are becoming increasingly important; combined with the proliferation of three-dimensional (3D) scanning hardware, this has made 3D face recognition a promising and feasible alternative to two-dimensional (2D) methods. The main advantage of 3D data over traditional 2D approaches is that it provides information invariant to rigid geometric transformations and to pose and illumination conditions. A key element of any 3D face recognition system is the modeling of the available scanned data. This paper presents new 3D models for facial surface representation and evaluates them with two matching approaches: one based on support vector machines and one on principal component analysis (with a Euclidean classifier). Two types of environments were also tested to check the robustness of the proposed models: one controlled with respect to facial conditions (expressions, face rotations, etc.) and one non-controlled (with face rotations and pronounced facial expressions). The recognition rates obtained using reduced-spatial-resolution representations (77.86% for non-controlled environments and 90.16% for controlled environments) show that the proposed models can be used effectively in practical face recognition applications.
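A minimal sketch of the PCA-plus-Euclidean-classifier matching stage described above, assuming face surfaces have already been converted to fixed-length feature vectors; the function names, the use of scikit-learn, and the component count are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: PCA projection followed by a nearest-neighbor
# Euclidean classifier, as one of the two matching approaches above.
import numpy as np
from sklearn.decomposition import PCA

def train_pca_classifier(train_vectors, train_labels, n_components=50):
    """Fit PCA on training face vectors and keep the projected gallery."""
    pca = PCA(n_components=n_components)
    gallery = pca.fit_transform(train_vectors)
    return pca, gallery, np.asarray(train_labels)

def classify(pca, gallery, labels, probe_vector):
    """Project a probe face and return the label of the closest gallery face."""
    probe = pca.transform(probe_vector.reshape(1, -1))
    distances = np.linalg.norm(gallery - probe, axis=1)  # Euclidean distances
    return labels[np.argmin(distances)]
```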
2.
Because the threshold of the local ternary pattern (LTP) cannot be calculated adaptively, the local texture features of a face image cannot be fully described under varying illumination and random noise. To address this, an improved adaptive local ternary pattern (IALTP) model is proposed. Firstly, a difference function between the center pixel and the weighted neighborhood pixels is established to obtain the statistical characteristics of the central pixel and its neighborhood. Secondly, an adaptive gradient-descent iterative function is established to calculate the difference coefficient, which is defined as the threshold of the IALTP operator. Finally, the mean and standard deviation of the pixel weights of the local region are used as the coding mode of IALTP. To reflect the overall properties of the face and reduce the feature dimension, two-directional two-dimensional PCA ((2D)2PCA) is adopted. IALTP is used to extract local texture features of the eye and mouth areas. After combining the global and local features, the fused features (IALTP+) are obtained. Experimental results on the Extended Yale B and AR standard face databases indicate that, under varying illumination and random noise, the proposed algorithm is more robust than others and its feature dimension is smaller. The shortest running time is 0.3296 s, and the highest recognition rate reaches 97.39%.
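A minimal sketch of plain LTP coding for one 3x3 neighborhood, to make the three-valued coding that IALTP adapts concrete; the fixed threshold `t` here is exactly what the paper replaces with an adaptively computed difference coefficient, and all names and values are illustrative.

```python
import numpy as np

def ltp_code(patch, t=5):
    """Three-valued LTP code of a 3x3 patch around its center pixel.
    Each of the 8 neighbors maps to +1 / 0 / -1 depending on whether it is
    above, within, or below the threshold band [center-t, center+t]."""
    center = patch[1, 1]
    neighbors = np.delete(patch.flatten(), 4)   # the 8 surrounding pixels
    code = np.zeros(8, dtype=int)
    code[neighbors >= center + t] = 1           # clearly brighter than center
    code[neighbors <= center - t] = -1          # clearly darker than center
    return code                                 # 0 means "within the band"

# Example: one neighborhood from a grayscale image
patch = np.array([[52, 60, 55],
                  [49, 54, 61],
                  [47, 50, 58]])
print(ltp_code(patch, t=5))
```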
3.
The current mainstream methods of loop closure detection in visual simultaneous localization and mapping (SLAM) are based on bag-of-words (BoW). However, traditional BoW-based approaches are strongly affected by changes in the appearance of the scene, which leads to poor robustness and low precision. To improve the precision and robustness of loop closure detection, a novel approach based on a stacked assorted auto-encoder (SAAE) is proposed. A traditional stacked auto-encoder is made up of multiple layers of the same auto-encoder; compared with the visual BoW model, it extracts better features of the scene image, but its output feature dimension is high. The proposed SAAE is composed of layers of denoising, convolutional and sparse auto-encoders: the denoising auto-encoder improves the robustness of the image features, the convolutional auto-encoder preserves the spatial information of the image, and the sparse auto-encoder reduces the feature dimensionality. It can extract low- to high-dimensional features of the scene image while preserving its local spatial characteristics, which makes the output features more robust. The performance of SAAE is evaluated in a comparison study on the New College and City Centre datasets. The proposed methodology effectively improves the precision and robustness of loop closure detection in visual SLAM.
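A minimal sketch of how per-keyframe descriptors can be turned into loop closure decisions once the auto-encoder stack has produced one feature vector per image; the cosine-similarity test, the threshold, and the frame-gap rule are illustrative assumptions, not the paper's exact matching procedure.

```python
import numpy as np

def detect_loop_closures(descriptors, threshold=0.9, min_gap=30):
    """Compare each frame descriptor against older ones and report candidate
    loop closures (pairs of frame indices) whose cosine similarity exceeds
    the threshold. `min_gap` skips frames that are too recent to count."""
    normed = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    closures = []
    for i in range(len(normed)):
        for j in range(i - min_gap):            # only frames far enough in the past
            similarity = float(normed[i] @ normed[j])
            if similarity > threshold:
                closures.append((j, i, similarity))
    return closures

# Example with random stand-in descriptors (in practice: SAAE outputs)
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(100, 64))
print(detect_loop_closures(descriptors)[:3])
```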
4.
To address the low pose estimation accuracy and poor robustness of traditional visual odometry algorithms in dynamic environments, a dense visual odometry algorithm that fuses edge information is proposed. First, depth information is used to compute the spatial coordinates of pixels, and the K-means algorithm is used to cluster the scene. Photometric-and-geometric consistency errors and edge alignment errors are constructed from the clusters based on photometric and edge information respectively; the two are combined and regularized to obtain a fused residual model. The average background depth is introduced into the residual model to widen the residual gap between dynamic and static parts, which favors correct motion segmentation. Then, based on the general characteristics of the cluster residual distribution, a non-parametric statistical model of motion likelihood is built; motion segmentation is performed with a dynamic threshold, dynamic objects are removed, and cluster weights are obtained. Finally, the weighted cluster residuals are added to the nonlinear optimization function for pose estimation, reducing the influence of dynamic objects and improving pose estimation accuracy. Experiments on the TUM dataset show that the proposed algorithm performs well both in static environments and in challenging highly dynamic environments, and achieves higher accuracy and robustness than existing algorithms in dynamic environments.
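A minimal sketch of the first step described above, back-projecting a depth image to 3D points with the pinhole camera model and clustering the scene with K-means; the camera intrinsics, cluster count, and synthetic depth map are placeholder values, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) to an (N, 3) array of 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]             # drop invalid (zero-depth) pixels

# Placeholder intrinsics and a synthetic depth map stand in for real data
depth = np.random.uniform(0.5, 4.0, size=(120, 160))
points = backproject(depth, fx=525.0, fy=525.0, cx=80.0, cy=60.0)

# Cluster the scene geometry; each cluster later receives its own residual/weight
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(points)
print(np.bincount(kmeans.labels_))
```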
5.
This paper presents a feature extraction scheme that can be applied to the classification of corals in submarine coral reef images. For coral reef image classification, texture features are extracted using the proposed Improved Local Derivative Pattern (ILDP). ILDP determines diagonal directional pattern features based on local derivative variations, which can capture full information. For classification, three classifiers are used: a Convolutional Neural Network (CNN); K-Nearest Neighbor (KNN) with four distance metrics, namely Euclidean, Manhattan, Canberra and Chi-square distance; and a Support Vector Machine (SVM) with three kernel functions, namely polynomial, radial basis function and sigmoid kernels. The accuracy of the proposed method is compared with Local Binary Pattern (LBP), Local Tetra Pattern (LTrP), Local Derivative Pattern (LDP) and Robust Local Ternary Pattern (RLTP) on five coral data sets and four texture data sets. Experimental results indicate that the ILDP feature extraction method, when tested on five coral data sets (EILAT, RSMAS, EILAT2, MLC2012 and SDMRI) and four texture data sets (KTH-TIPS, UIUCTEX, CURET and LAVA), achieves the highest overall classification accuracy and the minimum execution time compared with the other methods.
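A minimal sketch of the pipeline shape described above, pairing a simple local binary pattern histogram (as a stand-in for ILDP) with a KNN classifier using the Manhattan distance; the LBP descriptor, parameter values, and synthetic data are illustrative substitutes for the paper's features and coral datasets.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def texture_histogram(image, points=8, radius=1):
    """Uniform LBP histogram of a grayscale image (stand-in for ILDP)."""
    codes = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / hist.sum()

# Synthetic stand-ins for coral image patches and their class labels
rng = np.random.default_rng(1)
images = [rng.integers(0, 256, size=(64, 64)).astype(np.uint8) for _ in range(20)]
labels = [i % 2 for i in range(20)]
features = np.array([texture_histogram(img) for img in images])

# KNN with Manhattan distance, one of the four metrics mentioned above
knn = KNeighborsClassifier(n_neighbors=3, metric="manhattan").fit(features, labels)
print(knn.predict(features[:3]))
```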
6.
In this paper, a visual focus of attention (VFOA) detection method based on an improved hybrid incremental dynamic Bayesian network (IHIDBN), constructed by fusing head, gaze and prediction sub-models, is proposed to address the complexity and uncertainty of dynamic scenes. Firstly, the gaze detection sub-model is improved based on the traditional human eye model to enhance the recognition rate and robustness across different detected subjects. Secondly, the related sub-models are described, and conditional probability is used to establish a regression model for each of them; an incremental learning method dynamically updates the parameters to improve the adaptability of the model. The method has been evaluated on two public datasets and in everyday experiments. The results show that the proposed method can effectively estimate the user's VFOA and is robust to free head deflection and changes in distance.
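A minimal sketch of the incremental-learning idea mentioned above, refining a linear regression sub-model one observation at a time by stochastic gradient descent; this is a generic illustration, not the paper's IHIDBN update rule, and all names, features, and targets are placeholders.

```python
import numpy as np

class IncrementalRegressor:
    """Toy online regression: parameters are refined as observations arrive."""
    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def update(self, x, y):
        """One stochastic-gradient step on a single (features, target) pair."""
        error = (self.w @ x + self.b) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error

    def predict(self, x):
        return self.w @ x + self.b

# Example: placeholder head-pose features mapped to a gaze-related target
rng = np.random.default_rng(2)
model = IncrementalRegressor(n_features=3)
for _ in range(500):
    x = rng.normal(size=3)
    y = 0.5 * x[0] - 0.2 * x[1] + 0.1       # hidden ground-truth relation
    model.update(x, y)
print(model.w, model.b)
```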
7.
The expression texture information extracted by the completed local ternary patterns (CLTP) method is not accurate enough, which may lead to a low recognition rate. Therefore, an improved completed local ternary pattern (ICLTP) is proposed here. Firstly, the Scharr operator is used to calculate the gradient magnitudes of images to enhance texture detail, which helps obtain more accurate expression features. Secondly, CLTP features from two different neighborhoods are combined to obtain more facial expression information. Finally, K-nearest neighbor (KNN) and the sparse representation classifier (SRC) are combined for classification, and the method is tested with 10-fold cross-validation on the JAFFE and CK+ databases. The results show that ICLTP improves the recognition rate of facial expressions and reduces the confusion between expressions. In particular, in 7-class expression recognition, the rate at which the other six expressions are misrecognized as neutral is reduced.
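A minimal sketch of the first step above, computing a Scharr gradient magnitude image with OpenCV; the synthetic input image and the rescaling step are placeholders, and the rest of the ICLTP pipeline is not reproduced here.

```python
import cv2
import numpy as np

def scharr_magnitude(gray):
    """Gradient magnitude of a grayscale image using the Scharr operator."""
    gx = cv2.Scharr(gray, cv2.CV_64F, 1, 0)   # horizontal derivative
    gy = cv2.Scharr(gray, cv2.CV_64F, 0, 1)   # vertical derivative
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    # Rescale to 8-bit so texture operators (e.g. CLTP/ICLTP) can run on it
    return cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Placeholder input; in practice this would be a face image from JAFFE or CK+
gray = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
print(scharr_magnitude(gray).shape)
```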
8.
Improved point cloud registration algorithm in V-SLAM and mobile robot experiments   Total citations: 2; self-citations: 1; citations by others: 1
To address the large inter-frame registration errors in visual simultaneous localization and mapping (V-SLAM) for mobile robots, which cause low reconstruction accuracy and loss of the pose trajectory, a three-stage improved ICP point cloud registration algorithm is proposed. First, a RANSAC (random sample consensus) sampling strategy screens point pairs in the RGB images to obtain inliers, completing the preprocessing. Then, a double distance threshold on corresponding points, based on rigid-transformation consistency, completes the initial point cloud registration. Starting from this good initial pose, a fine ICP registration method with a dynamically iterated angle factor is introduced. In the back end, a keyframe selection mechanism combining a sliding window with random sampling is introduced, and the g2o (general graph optimization) algorithm optimizes the robot's pose trajectory, yielding a globally consistent V-SLAM system. Point cloud registration experiments on standard point cloud models, comparing the proposed algorithm with algorithms from the literature, show a clear improvement in registration accuracy; map reconstruction experiments with a mobile robot in a real environment verify the effectiveness of the algorithm; finally, experiments on the TUM dataset show that the algorithm can effectively estimate the robot's trajectory.
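A minimal sketch of one plain ICP loop (nearest-neighbor correspondences plus an SVD-based rigid transform), given as background for the fine-registration stage above; the RANSAC prefilter, double distance threshold, and dynamic angle factor from the paper are not reproduced, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=20):
    """Plain ICP: match each source point to its nearest target point,
    then update the cumulative rigid transform."""
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```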